Source Characterization of Atmospheric Releases using Quasi-Random Sampling and Regularized Gradient Optimization
Authors
Bhagirath Addepalli and Eric R. Pardyjak, Department of Mechanical Engineering, University of Utah; Christopher Sikorski, School of Computing, University of Utah
Abstract
In the present work, an inversion technique to solve the atmospheric source characterization problem is described. The inverse problem consists of characterizing the source (its x, y, and z coordinates and the source strength) and the meteorological conditions (wind speed and wind direction) at the source, given certain receptor locations and the concentration values at those locations. A simple Gaussian plume dispersion model for continuous point releases is adopted as the forward model. The solution methodology for this nonlinear inverse problem consists of Quasi-Monte Carlo (QMC) sampling of the model parameter space followed by gradient optimization. The purpose of the QMC sampling is to provide the gradient scheme with a good initial iterate from which to converge to the final solution. A new misfit functional, which computes the L∞-norm of the ratio of the observed and predicted data, has been developed and is used in the QMC search stage. It is demonstrated that this misfit functional guides the inversion algorithm to the global minimum. Quasi-random sampling was performed using the Hammersley point set in its original, scrambled, and randomized forms. Its performance was evaluated against the Mersenne Twister uniform pseudo-random number generator in terms of the speed and quality of the initial iterate provided. A regularized Newton's method with quadratic line search was employed for the gradient optimization. The standard Tikhonov stabilizing functional was used for regularization, and the regularization parameter was updated adaptively during the inversion. The proposed approach has been validated against both synthetic and field-experiment data. The results indicate that the proposed approach performs exceedingly well for inverse-source problems with the Gaussian dispersion equation as the forward operator. The work also highlights the advantages of deterministic low-discrepancy sampling over conventional pseudo-random sampling for solving the source-inversion problem.
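To make the QMC search stage of the abstract concrete, the following is a minimal sketch (not the authors' code): a reflected Gaussian plume forward model, an L∞-norm "ratio" misfit, and a Hammersley point set scanned over the parameter box to pick an initial iterate for the subsequent gradient stage. The dispersion-coefficient power laws, the parameter bounds, and the exact algebraic form of the misfit functional are illustrative assumptions, not taken from the paper.

```python
# Sketch of the QMC search stage described in the abstract (assumptions noted inline).
import numpy as np

def sigma_yz(downwind):
    """Assumed power-law dispersion coefficients (illustrative only)."""
    x = np.maximum(downwind, 1e-3)            # avoid division by zero at the source
    return 0.22 * x / np.sqrt(1.0 + 1e-4 * x), 0.20 * x

def gaussian_plume(params, receptors):
    """Ground-reflected Gaussian plume for a continuous point release.
    params = (xs, ys, zs, Q, u, theta); receptors is an (n, 3) array."""
    xs, ys, zs, Q, u, theta = params
    dx, dy = receptors[:, 0] - xs, receptors[:, 1] - ys
    # rotate receptors into a frame whose x-axis points downwind
    xd = np.cos(theta) * dx + np.sin(theta) * dy
    yd = -np.sin(theta) * dx + np.cos(theta) * dy
    z = receptors[:, 2]
    sy, sz = sigma_yz(xd)
    c = (Q / (2.0 * np.pi * u * sy * sz) * np.exp(-0.5 * (yd / sy) ** 2)
         * (np.exp(-0.5 * ((z - zs) / sz) ** 2) + np.exp(-0.5 * ((z + zs) / sz) ** 2)))
    return np.where(xd > 0.0, c, 0.0)         # no concentration upwind of the source

def linf_ratio_misfit(observed, predicted, eps=1e-30):
    """Assumed form of the L-infinity ratio misfit: zero when predicted matches observed."""
    return np.max(np.abs(np.log((observed + eps) / (predicted + eps))))

def hammersley(n, dim):
    """Hammersley point set in [0, 1]^dim: first coordinate i/n, rest radical inverses."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][: dim - 1]
    pts = np.empty((n, dim))
    pts[:, 0] = (np.arange(n) + 0.5) / n
    for j, b in enumerate(primes, start=1):
        for i in range(n):
            f, inv, k = 1.0 / b, 0.0, i
            while k > 0:
                inv += f * (k % b)
                k //= b
                f /= b
            pts[i, j] = inv
    return pts

def qmc_initial_iterate(observed, receptors, lo, hi, n=4096):
    """Scan a Hammersley sample of the 6-D parameter box; return the point with the
    smallest misfit as the initial iterate for the regularized Newton stage."""
    pts = lo + hammersley(n, len(lo)) * (hi - lo)
    misfits = [linf_ratio_misfit(observed, gaussian_plume(p, receptors)) for p in pts]
    return pts[int(np.argmin(misfits))]
```

In the approach described above, the point returned by `qmc_initial_iterate` would then seed the Tikhonov-regularized Newton iteration with quadratic line search; that gradient stage is not sketched here.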
Similar Sources
Source characterization of atmospheric releases using stochastic search and regularized gradient optimization
Source characterization of atmospheric releases using stochastic search and regularized gradient optimization B. Addepalli a , K. Sikorski b , E.R. Pardyjak a & M.S. Zhdanov c a Department of Mechanical Engineering, University of Utah, Salt Lake City 84112, USA b School of Computing, University of Utah, Salt Lake City 84112, USA c Department of Geology and Geophysics, University of Utah, Salt L...
Quasi-Monte Carlo, Monte Carlo, and regularized gradient optimization methods for source characterization of atmospheric releases
An inversion technique comprising stochastic search and regularized gradient optimization was developed to solve the atmospheric source characterization problem. The inverse problem comprises retrieving the spatial coordinates, source strength, and the wind speed and wind direction at the source, given certain receptor locations and concentration values at these receptor locations. The Gaussian...
Proximal Quasi-Newton for Computationally Intensive L1-regularized M-estimators
We consider the class of optimization problems arising from computationally intensive `1-regularized M -estimators, where the function or gradient values are very expensive to compute. A particular instance of interest is the `1-regularized MLE for learning Conditional Random Fields (CRFs), which are a popular class of statistical models for varied structured prediction problems such as sequenc...
New Quasi-Newton Optimization Methods for Machine Learning
This thesis develops new quasi-Newton optimization methods that exploit the wellstructured functional form of objective functions often encountered in machine learning, while still maintaining the solid foundation of the standard BFGS quasi-Newton method. In particular, our algorithms are tailored for two categories of machine learning problems: (1) regularized risk minimization problems with c...
Algorithms (X, sigma, eta): Quasi-random Mutations for Evolution Strategies
DRAFT OF A PAPER PUBLISHED IN EA’2005 : Anne Auger, Mohamed Jebalia, Olivier Teytaud. (X,sigma,eta) : quasi-random mutations for Evolution Strategies. Proceedings of Evolutionary Algorihtms’2005, 12 pages. Randomization is an efficient tool for global optimization. We here define a method which keeps : – the order 0 of evolutionary algorithms (no gradient) ; – the stochastic aspect of evolution...
Journal:
Volume / Issue:
Pages: -
Year of publication: 2009